Note: Please inquire about up-to-date Copyright terms
Contents
- Introduction to Monitor journalism
- Copyright and terms of acceptance
- Targeting your written work
- The Monitor’s perspective on artificial intelligence (AI) and its use
Copyright information and terms of acceptance
Generally, The Christian Science Monitor (“Monitor”) accepts work from new writers "on spec." That means you give us the opportunity to read your story before we decide whether to accept it. Our agreeing to look at something on spec implies no financial or other obligation on our part, unless we decide to accept the story for publication. We try to render verdicts on stories quickly, but we are often inundated, and you should feel free to pester us for an answer on a perishable story. Please review the specific guidelines posted by the editor of the section to which you are pitching a story (for editor contacts and guidelines, see “Targeting your written work” below). And please be sure that you are able to accept the terms outlined below before you submit a story.
When you file a story with the Monitor, it is assumed that the piece is your original work and that you will retain ownership of the copyright. However, we need you to license exclusive rights to us for 90 days worldwide in all media from the date of first publication. This includes, among other things, the right to distribute the story via aggregation and syndication, including The Christian Science Monitor News Service and The New York Times Syndicate, which provide Monitor stories in English and other languages to client news organizations in the United States and abroad. This also includes publishing the story in all the editions of the Monitor, as well as posting the story on the Monitor's website and social media platforms.
To see the full rights we need to be able to publish your story, please ask the editor to see our standard freelance contributor rights agreement template. Once your story has been accepted, we will send you the agreement for signature via our online signature portal. We cannot publish your work without the signed agreement. We also need to know if you have submitted the story to other news outlets, in order to avoid conflicting publication elsewhere.
If we commission you to write stories (usually after we have published several of your submissions), there is a financial obligation on our part. If you file a commissioned story that fulfills what you pitched to us, we will pay you our basic rate for the story whether we run it or not. If the commissioned story you deliver is unsatisfactory, we will ask you to rework it or we will pay a kill fee, usually half the basic rate. We may not pay a fee, however, if the story arrives late for avoidable reasons.
It's important that you and your editor clarify whether we are commissioning a story or asking to see a story on spec.
Targeting your written work
- International news
- National news
- The Home Forum
- A Christian Science Perspective
- Books
- Letters to the Editor
- People Making A Difference
PLEASE NOTE: We no longer have an op-ed page and are not accepting unsolicited opinion pieces.
The Monitor’s perspective on artificial intelligence (AI) and its use
The Christian Science Monitor always has been and always will be produced by people, for people. People write and edit all of our stories. AI is no substitute for human interviewing, on-the-ground reporting, or editing.
Artificial intelligence is making rapid strides and can assist us in some aspects of our work. We might use AI – selectively and with careful verification – as a tool to help find legitimate experts and research, to generate transcripts of our recorded interviews, to check spelling and adherence to grammar rules, or to turn original text scripts into directly matching audio.
We recognize how ubiquitous some forms of AI have already become – as a built-in feature in personal devices, or as a feature of widely used third-party search tools. We understand that a hardline ban on AI is neither practical nor productive.
But the technology is not conscious in the way people are, has no sense of ethics or morality, and can neither reflect nor perceive what we at the Monitor understand as real intelligence. AI has been known to take others’ proprietary work as source material, to incorporate misinformation, and to discriminate against those outside the mainstream.
That is why generative AI has no creative role at the Monitor.
We will continue to monitor and assess new uses of AI as they emerge, to be mindful of security and accuracy when using third-party services of any kind, and to update our use guidelines as the technology evolves. As journalists and protectors of truth, we will diligently report on the myriad ways AI can be misused.
Here’s an FAQ addressing current guidelines for staff and contributors (updated October 23, 2024):
As a staff or freelance writer contributing to the Monitor, what should I view as permissible use of AI in researching and preparing a story?
AI cannot be used to write any part of a story. Doing so is grounds for severing the relationship. It is a tool, not a crutch. Using AI chatbots to jumpstart research about a topic and generate lists of potential sources is permissible. So is the use of AI to transcribe recorded interviews and check the spelling and grammar of copy. But your reporting must verify anything a chatbot says. You read the reports it references. You interview the sources it points you to.
If you use a quote from a machine-generated transcription, you need to double-check it against the recording. And you write the article. We publish no machine-written stories or social media posts. The editors need to know there’s a person, using human judgment and creativity, who stands between machine output and the copy you send them.
How transparent do I need to be with my editor about such use, and at what point?
Be fully transparent with your editor about any use of the technology, including uses you consider permissible.
As an editor, what should I see as legitimate uses of AI? Can I use it to copy edit for style? To create story summaries (with reader-facing transparency)? To suggest headlines or display copy (for SEO purposes, for example)?
AI can be used as a tool for editors, much as it can serve writers. Editors can use generative AI tools (such as ChatGPT) for research, for help finding datasets, or for identifying potential sources for reporters. As a best practice, editors should use multiple chatbots to compare answers – avoiding overreliance on any one tool – and should fact-check what they find through AI (including via AI-provided links). Our copy chief encourages writers’ use of spell-check programs as a first screen. Copy edits will continue to be handled by our dedicated copy desk.
AI should not be used to write headlines or display copy. Developing headlines is an act of writing, and our readers should be assured that humans, rather than machines, are choosing what to highlight from a story. AI should not be used to create story summaries.
What about AI use in photography – or video – that I’m submitting for publication?
No use is permitted. Photographs have an immediate and indelible impact on viewers and are assumed to be representations of reality – a moment in time. Using generative AI to create images for publication – or even to write captions – is unacceptable.
As an illustrator, I see prompt-generation as part of my ideation process. Can I work with AI to begin the process of creating art for submission?
No. Generative AI is off-limits for any reader-facing designs, illustrations, and graphics. It’s nearly impossible to determine the provenance of an AI-generated image, and AI has been known to draw from copyrighted works. We strongly favor human control here.
What about feeding data to AI in the interest of having it assist with data visualization?
We of course use software tools to help us create graphics. But here, too, the black-box nature of generative AI makes it nearly impossible to tell whether a large language model is inserting factual errors that would then be passed along to our readers.
What's the Monitor’s position on generating and editing audio?
Digital transcription – audio to text – is not new, and AI tools can speed that work. But AI today can also perform “voice cloning” based on samples of speech, generating what sounds like spoken content never actually voiced by the speaker. Voice cloning is not permitted.